%0 Conference Proceedings
%4 sid.inpe.br/sibgrapi/2018/08.28.03.44
%2 sid.inpe.br/sibgrapi/2018/08.28.03.44.14
%@doi 10.1109/SIBGRAPI.2018.00064
%T Bag of attributes for video event retrieval
%D 2018
%A Duarte, Leonardo Assuane,
%A Penatti, Otávio Augusto Bizetto,
%A Almeida, Jurandy,
%@affiliation Universidade Federal de São Paulo - UNIFESP
%@affiliation SAMSUNG Research Institute
%@affiliation Universidade Federal de São Paulo - UNIFESP
%E Ross, Arun,
%E Gastal, Eduardo S. L.,
%E Jorge, Joaquim A.,
%E Queiroz, Ricardo L. de,
%E Minetto, Rodrigo,
%E Sarkar, Sudeep,
%E Papa, João Paulo,
%E Oliveira, Manuel M.,
%E Arbeláez, Pablo,
%E Mery, Domingo,
%E Oliveira, Maria Cristina Ferreira de,
%E Spina, Thiago Vallin,
%E Mendes, Caroline Mazetto,
%E Costa, Henrique Sérgio Gutierrez,
%E Mejail, Marta Estela,
%E Geus, Klaus de,
%E Scheer, Sergio,
%B Conference on Graphics, Patterns and Images, 31 (SIBGRAPI)
%C Foz do Iguaçu, PR, Brazil
%8 29 Oct.-1 Nov. 2018
%I IEEE Computer Society
%J Los Alamitos
%S Proceedings
%K video event retrieval, video representation, visual dictionaries, semantics.
%X In this paper, we present the Bag-of-Attributes (BoA) model for video representation, aimed at video event retrieval. The BoA model is based on a semantic feature space for representing videos, resulting in high-level video feature vectors. To create the semantic space, i.e., the attribute space, we train a classifier on a labeled image dataset, obtaining a classification model that can be understood as a high-level codebook. This model is used to map low-level frame vectors into high-level vectors (e.g., classifier probability scores). We then apply pooling operations to the frame vectors to create the final bag of attributes for the video. In the BoA representation, each dimension corresponds to one category (or attribute) of the semantic space. Other interesting properties are compactness, flexibility regarding the classifier, and the ability to encode multiple semantic concepts in a single video representation. Our experiments considered the semantic space created by state-of-the-art convolutional neural networks pre-trained on the 1000 object categories of ImageNet. These networks were used to classify each video frame, and different coding strategies were then used to encode the probability distribution from the softmax layer into a frame vector. Next, different pooling strategies were used to combine the frame vectors into the BoA representation for a video. Results using BoA were comparable or superior to the baselines on the task of video event retrieval with the EVVE dataset, with the advantage of providing a much more compact representation.
%@language en
%3 59paper.pdf
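
The pipeline described in the abstract above can be summarized in a short sketch. The Python snippet below is only an illustration under stated assumptions, not the authors' implementation: it assumes per-frame softmax probability vectors (one 1000-dimensional vector per frame, from an ImageNet-pretrained CNN) have already been extracted, and the use of numpy, average/max pooling, and L2 normalization are illustrative choices.

# Minimal sketch of the Bag-of-Attributes (BoA) idea described in the abstract.
# Assumption (not from the paper's code): each video is already a
# (num_frames x 1000) array of softmax probabilities from an ImageNet-pretrained CNN.
import numpy as np

def bag_of_attributes(frame_probs: np.ndarray, pooling: str = "avg") -> np.ndarray:
    """Pool per-frame attribute (class-probability) vectors into one video vector."""
    if pooling == "avg":
        boa = frame_probs.mean(axis=0)   # average pooling over frames
    elif pooling == "max":
        boa = frame_probs.max(axis=0)    # max pooling over frames
    else:
        raise ValueError(f"unknown pooling: {pooling}")
    # L2-normalize so videos of different lengths are comparable
    # (a common choice, assumed here rather than taken from the paper).
    return boa / (np.linalg.norm(boa) + 1e-12)

def rank_videos(query_boa: np.ndarray, database_boas: np.ndarray) -> np.ndarray:
    """Return database indices sorted by Euclidean distance to the query BoA vector."""
    dists = np.linalg.norm(database_boas - query_boa, axis=1)
    return np.argsort(dists)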